15 research outputs found

    Biologically plausible deep learning -- but how far can we go with shallow networks?

    Training deep neural networks with the error backpropagation algorithm is considered implausible from a biological perspective. Numerous recent publications suggest elaborate models for biologically plausible variants of deep learning, typically defining success as reaching around 98% test accuracy on the MNIST data set. Here, we investigate how far we can go on digit (MNIST) and object (CIFAR10) classification with biologically plausible, local learning rules in a network with one hidden layer and a single readout layer. The hidden layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by local learning rules. The readout layer is trained with a supervised, local learning rule. We first implement these models with rate neurons. This comparison reveals, first, that unsupervised learning does not lead to better performance than fixed random projections or Gabor filters for large hidden layers. Second, networks with localized receptive fields perform significantly better than networks with all-to-all connectivity and can reach backpropagation performance on MNIST. We then implement two of the networks (fixed, localized random filters and random Gabor filters in the hidden layer) with spiking leaky integrate-and-fire neurons and spike-timing-dependent plasticity to train the readout layer. These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation. The performance of our shallow network models is comparable to most current biologically plausible models of deep learning. Furthermore, our results with a shallow spiking network provide an important reference and suggest the use of datasets other than MNIST for testing the performance of future models of biologically plausible deep learning.
    Comment: 14 pages, 4 figures
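
The rate-based setup described here (a fixed random hidden layer, a readout trained with a supervised local rule) can be sketched on toy data. Everything below - the Gaussian toy data, layer sizes, and learning rate - is illustrative, not the paper's actual MNIST pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: two Gaussian classes in 20 dimensions.
n, d, n_hidden = 200, 20, 100
X = np.vstack([rng.normal(-0.5, 1.0, (n // 2, d)),
               rng.normal(+0.5, 1.0, (n // 2, d))])
y = np.repeat([0.0, 1.0], n // 2)

# Hidden layer: fixed random projection; these weights are never trained,
# analogous to the fixed random filters in the abstract.
W_hidden = rng.normal(0.0, 1.0 / np.sqrt(d), (d, n_hidden))
H = np.maximum(X @ W_hidden, 0.0)            # rate-neuron activations (ReLU)

# Readout trained with a supervised, local delta rule: each weight update
# uses only the presynaptic rate and the postsynaptic error.
w_out = np.zeros(n_hidden)
b_out = 0.0
lr = 0.01
for epoch in range(50):
    for i in rng.permutation(n):
        pred = 1.0 / (1.0 + np.exp(-(H[i] @ w_out + b_out)))
        err = y[i] - pred                    # locally available at the readout
        w_out += lr * err * H[i]
        b_out += lr * err

acc = np.mean(((H @ w_out + b_out) > 0.0) == (y > 0.5))
```

Note that no error signal ever propagates back into `W_hidden`; only the readout learns, which is the sense in which all plasticity here is local.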

    Mermin-Wagner fluctuations in 2D amorphous solids

    In a recent comment, M. Kosterlitz described how the discrepancy between the predicted lack of broken translational symmetry in two dimensions - which cast doubt on the existence of 2D crystals - and the first computer simulations foretelling 2D crystals, at least in tiny systems, motivated him and D. Thouless to investigate melting and superfluidity in two dimensions [Jour. of Phys. Cond. Matt. \textbf{28}, 481001 (2016)]. The lack of broken symmetries proposed by D. Mermin and H. Wagner is caused by long wavelength density fluctuations. Those fluctuations have not only a structural impact but also a dynamical one: they cause the Lindemann criterion to fail in 2D and the mean squared displacement to be unbounded. Comparing experimental data from 3D and 2D amorphous solids with 2D crystals, we disentangle Mermin-Wagner fluctuations from glassy structural relaxations. Furthermore, we demonstrate with computer simulations the logarithmic increase of displacements predicted by Mermin and Wagner: periodicity is not a requirement for Mermin-Wagner fluctuations, which conserve the homogeneity of space on long scales.
    Comment: 7 pages, 4 figures
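
The logarithmic increase of displacements can be illustrated with a back-of-the-envelope equipartition sum (not the simulation protocol of the paper): in a 2D harmonic solid each mode of wavevector q contributes ~1/q² to the mean squared displacement, and summing over the allowed modes of an L x L system diverges like log(L):

```python
import numpy as np

def msd_harmonic(L):
    """Equipartition estimate of <u^2> (arbitrary units) for an L x L
    harmonic lattice: each mode q contributes ~ 1/omega_q^2 ~ 1/q^2."""
    k = 2.0 * np.pi * np.fft.fftfreq(L)      # allowed wavevectors
    qx, qy = np.meshgrid(k, k)
    q2 = qx**2 + qy**2
    q2[0, 0] = np.inf                        # drop the q = 0 translation mode
    return np.sum(1.0 / q2) / L**2

sizes = [8, 16, 32, 64, 128]
msd = [msd_harmonic(L) for L in sizes]
# In 2D the mode sum grows logarithmically with system size: the fitted
# slope against log(L) approaches 1/(2*pi) ~ 0.159.  In 3D the analogous
# sum converges, which is why displacements stay bounded there.
slope = np.polyfit(np.log(sizes), msd, 1)[0]
```

This is exactly the divergence that makes the Lindemann criterion fail in 2D: the mean squared displacement is not bounded by any fixed fraction of the lattice constant as the system grows.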

    Hierarchical Salient Object Detection for Assisted Grasping

    Visual scene decomposition into semantic entities is one of the major challenges when creating a reliable object grasping system. Recently, we introduced a bottom-up hierarchical clustering approach which is able to segment objects and parts in a scene. In this paper, we introduce a transform from such a segmentation into a corresponding, hierarchical saliency function. In comprehensive experiments we demonstrate its ability to detect salient objects in a scene. Furthermore, this hierarchical saliency defines a most salient corresponding region (scale) for every point in an image. Based on this, an easy-to-use pick-and-place manipulation system was developed and tested in an exemplary application.
    Comment: Accepted for ICRA 201
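
The notion of a "most salient corresponding region (scale) for every point" can be sketched with a toy region hierarchy; the region names and saliency scores below are hypothetical, not taken from the paper:

```python
# Each pixel of the segmented image belongs to a chain of nested regions
# (leaf region up to the whole scene).  The "most salient scale" for a
# point is the region in that chain with the highest saliency score.
hierarchy = {
    "scene":  {"parent": None,    "saliency": 0.1},
    "table":  {"parent": "scene", "saliency": 0.3},
    "mug":    {"parent": "table", "saliency": 0.9},   # graspable object
    "handle": {"parent": "mug",   "saliency": 0.6},   # object part
}

def most_salient_scale(leaf):
    """Walk from a leaf region to the root and return the region with
    the maximum saliency along the chain."""
    best, node = leaf, leaf
    while node is not None:
        if hierarchy[node]["saliency"] > hierarchy[best]["saliency"]:
            best = node
        node = hierarchy[node]["parent"]
    return best

print(most_salient_scale("handle"))   # prints "mug": the object beats its part
```

For grasping, such a rule lets the system pick the object-level region around any queried point rather than an oversegmented part or the whole scene.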

    NMDA-driven dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways.

    While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
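
The core claim - that context-modulating inputs to dendritic branches, trained with an error-modulated Hebbian rule over fixed feedforward weights, can solve linearly nonseparable problems - can be sketched with rate units. The task, layer sizes, gain function, and learning rates below are illustrative stand-ins, not the paper's biophysical model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two contexts sharing one feedforward pathway: in context 0 the label is
# the sign of feature x1, in context 1 the sign of x2.  Even with the
# context appended as an input, this joint task is not linearly separable.
n, n_hidden = 400, 60
X = rng.uniform(-1, 1, (n, 2))
ctx = rng.integers(0, 2, n)
y = (X[np.arange(n), ctx] > 0).astype(float)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W = rng.normal(0, 1, (n_hidden, 2))      # feedforward weights, never trained
b = rng.normal(0, 0.1, n_hidden)
M = np.zeros((n_hidden, 2))              # modulatory ("dendritic") weights
v = rng.normal(0, 0.1, n_hidden)         # readout weights

lr = 0.05
for epoch in range(400):
    for i in rng.permutation(n):
        h = np.maximum(W @ X[i] + b, 0.0)    # context-independent rates
        g = sig(M[:, ctx[i]])                # context-dependent branch gain
        pred = sig(v @ (h * g))
        err = y[i] - pred                    # global error signal
        # Hebbian updates gated by the global error: products of locally
        # available pre- and postsynaptic factors, scaled by err.
        v += lr * err * h * g
        M[:, ctx[i]] += lr * err * v * h * g * (1.0 - g)

H = np.maximum(X @ W.T + b, 0.0)
G = sig(M[:, ctx].T)
acc = np.mean((sig((H * G) @ v) > 0.5) == (y > 0.5))
```

The feedforward weights `W` stay fixed throughout; only the multiplicative context gains and the readout adapt, which is the transfer-learning setting the abstract describes.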

    Dendritic modulation for multitask representation learning in deep feedforward networks

    Feedforward sensory processing in the brain is generally construed as proceeding through a hierarchy of layers, each constructing increasingly abstract and invariant representations of sensory inputs. This interpretation is at odds with the observation that activity in sensory processing layers is heavily modulated by contextual signals, such as cross-modal information or internal mental states [1]. While it is tempting to assume that such modulations bias the feedforward processing pathway towards detection of relevant input features given a context, this induces a dependence on the contextual state in hidden representations at any given layer. The next processing layer in the hierarchy thus has to be able to extract relevant information for each possible context. For this reason, most machine learning approaches to multitask learning apply task-specific output networks to context-independent representations of the inputs, generated by a shared trunk network.
    Here, we show that a network motif, where a layer of modulated hidden neurons targets an output neuron through task-independent feedforward weights, solves multitask learning problems, and that this network motif can be implemented with biophysically realistic neurons that receive context-modulating synaptic inputs on dendritic branches. The dendritic synapses in this motif evolve according to a Hebbian plasticity rule modulated by a global error signal. We then embed such a motif in each layer of a deep feedforward network, where it generates task-modulated representations of sensory inputs. To learn feedforward weights to the next layer in the network, we apply a contrastive learning objective that predicts whether representations originate either from different inputs, or from different task-modulations of the same input. This self-supervised approach results in deep representation learning of feedforward weights that accommodate a multitude of contexts, without relying on error backpropagation between layers.
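
The contrastive objective sketched here can be illustrated as a simple pairwise margin loss: attract two task-modulations of the same input, repel representations of different inputs. This is an illustrative stand-in, not the paper's exact objective:

```python
import numpy as np

def contrastive_grad(z_a, z_b, same_input, margin=2.0):
    """Gradient (w.r.t. z_a) of a pairwise contrastive loss: pull z_a toward
    z_b when they are two context-modulations of the same input, push them
    apart up to `margin` when they come from different inputs."""
    diff = z_a - z_b
    d = np.linalg.norm(diff)
    if same_input:
        return diff                          # grad of 0.5 * d**2
    if 0.0 < d < margin:
        return -(margin - d) * diff / d      # grad of 0.5 * (margin - d)**2
    return np.zeros_like(diff)

# A gradient step on a "same input" pair reduces the distance...
z1, z2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
z1_pos = z1 - 0.1 * contrastive_grad(z1, z2, True)
# ...while a step on a "different inputs" pair increases it, up to the margin.
z1_neg = z1 - 0.1 * contrastive_grad(z1, z2, False)
```

Because the loss compares representations within a layer, the resulting weight updates need no error signal propagated back from deeper layers, matching the abstract's claim of learning without inter-layer backpropagation.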
